
    Manners and method in classical criticism of the early eighteenth century

    This article explores a neglected period in the history of classical scholarship: the first decades of the eighteenth century. It focuses on the tension between an evolving idea of method and the tradition of personal polemic that had been an important part of the culture of scholarship since the Renaissance. There are two case studies: the conflict between Jean Le Clerc and Pieter Burman, and the controversy that followed Richard Bentley's edition of Horace's Odes. Both demonstrate the need to revise current paradigms for writing the history of scholarship, and invite us to reconsider the role of methodology in the production of scholarly authority.

    Non-negative mixed finite element formulations for a tensorial diffusion equation

    We consider the tensorial diffusion equation and address the discrete maximum-minimum principle of mixed finite element formulations. In particular, we address non-negative solutions, a special case of the maximum-minimum principle, of mixed finite element formulations. In this paper we present two non-negative mixed finite element formulations for tensorial diffusion equations based on constrained optimization techniques (in particular, quadratic programming). The proposed mixed formulations produce non-negative numerical solutions on arbitrary meshes for low-order (i.e., linear, bilinear and trilinear) finite elements. The first formulation is based on the Raviart-Thomas spaces and is obtained by adding a non-negativity constraint to the variational statement of the Raviart-Thomas formulation. The second non-negative formulation is based on the variational multiscale formulation. For the former formulation we comment on the effect of adding the non-negativity constraint on the local mass balance property of the Raviart-Thomas formulation. We also study the performance of the active set strategy for solving the resulting constrained optimization problems. The overall performance of the proposed formulations is illustrated on three canonical test problems. Comment: 40 pages using the amsart style file, and 15 figures
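
    After discretization, formulations of this kind lead to quadratic programs with non-negativity constraints. The sketch below is only a toy illustration of how such a constraint changes the discrete solution; it uses a simple projected-gradient loop rather than the paper's active-set strategy, and the matrix K, vector f and function name are illustrative placeholders.

        import numpy as np

        def nonnegative_qp(K, f, tol=1e-10, max_iter=10000):
            """Projected-gradient solve of min 0.5*c^T K c - c^T f subject to c >= 0 (K SPD)."""
            step = 1.0 / np.linalg.eigvalsh(K)[-1]        # safe step length for an SPD matrix
            c = np.maximum(np.linalg.solve(K, f), 0.0)    # start from the clipped unconstrained solution
            for _ in range(max_iter):
                c_new = np.maximum(c - step * (K @ c - f), 0.0)   # gradient step, then project onto c >= 0
                if np.linalg.norm(c_new - c) < tol:
                    break
                c = c_new
            return c_new

        K = np.array([[4.0, -2.0],
                      [-2.0, 4.0]])                       # toy SPD "stiffness-like" matrix
        f = np.array([1.0, -1.0])
        print(np.linalg.solve(K, f))                      # unconstrained solution has a negative entry
        print(nonnegative_qp(K, f))                       # constrained solution stays non-negative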

    Where are the missing gamma ray burst redshifts?

    In the redshift range z = 0-1, the gamma-ray burst (GRB) redshift distribution should increase rapidly because of increasing differential volume sizes and strong evolution in the star formation rate. This feature is not observed in the Swift redshift distribution, and to account for this discrepancy a dominant bias, independent of the Swift sensitivity, is required. Furthermore, despite rapid localization, about 40-50% of Swift and pre-Swift GRBs do not have a measured redshift. We employ a heuristic technique to extract this redshift bias using 66 GRBs localized by Swift with redshifts determined from absorption or emission spectroscopy. For the Swift and HETE+BeppoSAX redshift distributions, the best model fit to the bias at z < 1 implies that if GRB rate evolution follows the star formation rate, the bias cancels this rate increase. We find that the same bias affects the Swift and HETE+BeppoSAX measurements similarly at z < 1. Using a bias model constrained at a 98% KS probability, we find that 72% of GRBs at z < 2, and about 55% at z > 2, will not have measurable redshifts. Achieving this high KS probability requires increasing the GRB rate density at small z relative to the high-z rate. This provides further evidence for a low-luminosity population of GRBs that are observed in only a small volume because of their faintness. Comment: 5 pages, submitted to MNRAS
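
    As a rough illustration of the expected shape described in the first sentence, the sketch below builds a toy dN/dz proportional to SFR(z) * dV/dz / (1+z) and multiplies it by an assumed selection bias; the cosmology, SFR parametrization and bias function are placeholder choices, not the model fitted in the paper.

        import numpy as np

        c_kms, H0, Om, Ol = 2.998e5, 70.0, 0.3, 0.7        # flat LCDM parameters (assumed)

        z = np.linspace(1e-3, 5.0, 2000)
        dz = z[1] - z[0]
        Hz = H0 * np.sqrt(Om * (1 + z) ** 3 + Ol)
        Dc = np.cumsum(c_kms / Hz) * dz                    # comoving distance [Mpc], crude quadrature
        dVdz = 4 * np.pi * c_kms * Dc ** 2 / Hz            # comoving volume element

        sfr = np.where(z < 1, (1 + z) ** 3.4, 2.0 ** 3.4)  # toy SFR evolution: steep rise to z=1, then flat
        bias = np.exp(-z)                                  # toy redshift-dependent selection bias

        dNdz_intrinsic = sfr * dVdz / (1 + z)              # 1/(1+z) accounts for cosmological time dilation
        dNdz_observed = dNdz_intrinsic * bias

        print("intrinsic distribution peaks near z =", round(z[np.argmax(dNdz_intrinsic)], 2))
        print("biased distribution peaks near   z =", round(z[np.argmax(dNdz_observed)], 2))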

    An anisotropic mesh adaptation method for the finite element solution of heterogeneous anisotropic diffusion problems

    Heterogeneous anisotropic diffusion problems arise in various areas of science and engineering, including plasma physics, petroleum engineering, and image processing. Standard numerical methods can produce spurious oscillations when they are used to solve these problems. A common approach to avoiding this difficulty is to design a proper numerical scheme and/or a proper mesh so that the numerical solution satisfies the discrete counterpart (DMP) of the maximum principle satisfied by the continuous solution. A well-known mesh condition for DMP satisfaction by the linear finite element solution of isotropic diffusion problems is the non-obtuse angle condition, which requires the dihedral angles of mesh elements to be non-obtuse. In this paper, a generalization of this condition, the so-called anisotropic non-obtuse angle condition, is developed for the finite element solution of heterogeneous anisotropic diffusion problems. The new condition is essentially the same as the existing one except that the dihedral angles are now measured in a metric depending on the diffusion matrix of the underlying problem. Several variants of the new condition are obtained. Based on one of them, two metric tensors for use in anisotropic mesh generation are developed to account for DMP satisfaction and for the combination of DMP satisfaction and mesh adaptivity. Numerical examples are given to demonstrate the features of the linear finite element method on anisotropic meshes generated with the metric tensors. Comment: 34 pages
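
    One standard algebraic route to the DMP for linear elements is to require the off-diagonal entries of each element stiffness matrix, assembled with the local diffusion tensor, to be non-positive; angle conditions of the kind discussed here are designed to guarantee this. The sketch below checks that algebraic criterion on a single triangle; the function names and sample tensors are placeholders, and this is not the paper's metric-based formulation.

        import numpy as np

        def p1_gradients(tri):
            """Gradients of the three linear (P1) basis functions on a triangle (rows of the result)."""
            M = np.hstack([np.ones((3, 1)), tri])   # each basis function is a + b*x + c*y
            coeffs = np.linalg.inv(M)               # column i holds (a_i, b_i, c_i)
            return coeffs[1:, :].T                  # row i is (b_i, c_i) = grad(phi_i)

        def element_stiffness(tri, D):
            """P1 element stiffness matrix: area * grad(phi_i)^T D grad(phi_j)."""
            grads = p1_gradients(tri)
            area = 0.5 * abs(np.linalg.det(np.vstack([tri[1] - tri[0], tri[2] - tri[0]])))
            return area * grads @ D @ grads.T

        def off_diagonals_nonpositive(tri, D, tol=1e-12):
            A = element_stiffness(tri, D)
            return bool(np.all(A[~np.eye(3, dtype=bool)] <= tol))

        tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])        # right triangle
        print(off_diagonals_nonpositive(tri, np.eye(2)))            # True: fine for isotropic diffusion
        D_aniso = np.array([[10.0, 3.0], [3.0, 1.0]])               # SPD but strongly anisotropic tensor
        print(off_diagonals_nonpositive(tri, D_aniso))              # False: same triangle violates the condition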

    The cutoff method for the numerical computation of nonnegative solutions of parabolic PDEs with application to anisotropic diffusion and lubrication-type equations

    The cutoff method, which cuts off the values of a function below a given threshold, is studied for the numerical computation of nonnegative solutions of parabolic partial differential equations. A convergence analysis is given for a broad class of finite difference methods combined with cutoff for linear parabolic equations. Two applications are investigated: linear anisotropic diffusion problems satisfying the setting of the convergence analysis, and nonlinear lubrication-type equations for which it is unclear whether the convergence analysis applies. The numerical results are shown to be consistent with the theory and in good agreement with existing results in the literature. The convergence analysis and applications demonstrate that the cutoff method is an effective tool for the computation of nonnegative solutions. Cutoff can also be used with other discretization methods, such as collocation, finite volume, finite element, and spectral methods, and for the computation of positive solutions. Comment: 19 pages, 41 figures
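
    A minimal sketch of the cutoff idea, here applied to a 1D linear diffusion problem advanced with an explicit finite-difference scheme: after each time step, any negative values are simply clipped to zero. Grid sizes and initial data are illustrative choices; for this simple isotropic problem undershoots are rare, whereas the anisotropic and lubrication-type problems in the paper are exactly the cases where they occur.

        import numpy as np

        nx = 101
        x = np.linspace(0.0, 1.0, nx)
        dx = x[1] - x[0]
        dt = 0.4 * dx ** 2                                   # within the explicit stability limit dt <= 0.5*dx^2
        u = np.maximum(0.0, 1.0 - 5.0 * np.abs(x - 0.5))     # nonnegative "hat" initial condition

        for _ in range(500):
            lap = np.zeros_like(u)
            lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
            u = u + dt * lap                                 # one explicit Euler step for u_t = u_xx
            u = np.maximum(u, 0.0)                           # the cutoff: discard values below zero

        print("minimum of the computed solution:", u.min())  # nonnegative by construction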

    EAGLE multi-object AO concept study for the E-ELT

    EAGLE is the multi-object, spatially resolved, near-IR spectrograph instrument concept for the E-ELT, relying on distributed adaptive optics, so-called Multi-Object Adaptive Optics. This paper presents the results of a Phase A study. Using 84x84-actuator deformable mirrors, the analysis performed demonstrates that 6 laser guide stars and up to 5 natural guide stars of magnitude R < 17, picked up in a 7.3'-diameter patrol field of view, allow an overall performance, in terms of ensquared energy, of 35% in a 75x75 mas^2 spaxel in the H band for any target direction in the centred 5' science field under median seeing conditions. The computed sky coverage at galactic latitudes |b| ~ 60 deg is close to 90%. Comment: 6 pages, to appear in the proceedings of the AO4ELT conference, held in Paris, 22-26 June 2009

    Probing the low-luminosity GRB population with new generation satellite detectors

    We compare the detection rates and redshift distributions of low-luminosity (LL) GRBs localized by Swift with those expected to be observed by the new-generation satellite detectors on GLAST (now Fermi) and, in the future, EXIST. Although the GLAST burst telescope will be less sensitive than Swift's in the 15-150 keV band, its large field of view implies that it will double Swift's detection rate of LL bursts. We show that Swift, GLAST and EXIST should detect about 1, 2 and 30 LL GRBs, respectively, over a 5-year operational period. The burst telescope on EXIST should detect LL GRBs at a rate more than an order of magnitude greater than that of Swift's BAT. We show that the detection horizon for LL GRBs will be extended from z ≈ 0.4 for Swift to z ≈ 1.1 in the EXIST era. LL bursts will also contribute an identifiable feature to the observed GRB redshift distribution at z ≈ 1. Comment: 6 pages, 4 figures, accepted by MNRAS
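
    A detection horizon of this kind can be estimated from a simple flux-limit argument: a burst of peak luminosity L is detectable out to the redshift where L / (4*pi*d_L(z)^2) drops below the detector's flux limit. The sketch below illustrates that calculation with placeholder numbers; the luminosity, flux limits and cosmology are assumptions, not the instrument values used in the paper.

        import numpy as np

        c_kms, H0, Om, Ol = 2.998e5, 70.0, 0.3, 0.7            # flat LCDM parameters (assumed)
        Mpc_cm = 3.086e24

        z = np.linspace(1e-4, 3.0, 3000)
        dz = z[1] - z[0]
        Hz = H0 * np.sqrt(Om * (1 + z) ** 3 + Ol)
        dL = (1 + z) * np.cumsum(c_kms / Hz) * dz * Mpc_cm      # luminosity distance [cm], crude quadrature

        L = 1e49                                                # assumed LL-GRB peak luminosity [erg/s]
        for name, Flim in [("less sensitive detector", 1e-8),
                           ("more sensitive detector", 1e-9)]:  # assumed flux limits [erg/cm^2/s]
            peak_flux = L / (4 * np.pi * dL ** 2)
            z_max = z[peak_flux >= Flim][-1]                    # largest redshift still above the flux limit
            print(f"{name}: horizon z_max ~ {z_max:.2f}")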

    Enforcing the non-negativity constraint and maximum principles for diffusion with decay on general computational grids

    In this paper, we consider anisotropic diffusion with decay, with the diffusivity coefficient taken to be a second-order symmetric positive definite tensor. It is well known that this equation is a second-order elliptic equation and satisfies a maximum principle under certain regularity assumptions. However, the finite element implementation of the classical Galerkin formulation for both anisotropic and isotropic diffusion with decay does not respect the maximum principle. We first show that the numerical accuracy of the classical Galerkin formulation deteriorates dramatically as the decay coefficient increases for an isotropic medium, and that the formulation violates the discrete maximum principle; in the isotropic case, however, the extent of the violation decreases with mesh refinement. We then show that, for an anisotropic medium, the classical Galerkin formulation violates the discrete maximum principle even at lower values of the decay coefficient, and that the violation does not vanish with mesh refinement. We then present a methodology for enforcing maximum principles under the classical Galerkin formulation for anisotropic diffusion with decay on general computational grids using optimization techniques. Representative numerical results, which take into account anisotropy and heterogeneity, are presented to illustrate the performance of the proposed formulation.
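
    One way to read such an optimization-based methodology is as a bound-constrained problem: minimize the discrete energy of the Galerkin system subject to the bounds implied by the maximum principle. The sketch below shows that idea on a tiny system using a generic bound-constrained solver; the matrix, load vector and bounds are placeholders rather than the paper's discretization.

        import numpy as np
        from scipy.optimize import minimize

        A = np.array([[ 4.0, -2.0,  0.0],
                      [-2.0,  5.0, -2.0],
                      [ 0.0, -2.0,  4.0]])      # stands in for the stiffness-plus-decay system matrix
        f = np.array([1.0, -3.0, 1.0])
        lo, hi = 0.0, 1.0                       # bounds suggested by the maximum principle (assumed)

        def energy(u):
            return 0.5 * u @ A @ u - f @ u

        def grad(u):
            return A @ u - f

        u_galerkin = np.linalg.solve(A, f)      # classical Galerkin solution; may violate the bounds
        res = minimize(energy, np.clip(u_galerkin, lo, hi), jac=grad,
                       method="L-BFGS-B", bounds=[(lo, hi)] * len(f))

        print("unconstrained:", u_galerkin)     # contains negative entries
        print("constrained:  ", res.x)          # satisfies 0 <= u <= 1 entrywise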

    Fluid-structure interaction in blood flow capturing non-zero longitudinal structure displacement

    We present a new model and a novel loosely coupled partitioned numerical scheme for fluid-structure interaction (FSI) in blood flow allowing non-zero longitudinal displacement. Arterial walls are modeled by a linearly viscoelastic, cylindrical Koiter shell model capturing both radial and longitudinal displacement. Fluid flow is modeled by the Navier-Stokes equations for an incompressible, viscous fluid. The two are fully coupled via kinematic and dynamic coupling conditions. Our numerical scheme is based on a new modified Lie operator splitting that decouples the fluid and structure sub-problems in a way that leads to a loosely coupled scheme which is unconditionally stable. This is achieved by a judicious use of the kinematic coupling condition in the fluid and structure sub-problems, leading to an implicit coupling between the fluid and structure velocities. The proposed scheme is a modification of the recently introduced "kinematically coupled scheme", for which the newly proposed modified Lie splitting significantly increases the accuracy. The performance and accuracy of the scheme were studied on a couple of instructive examples, including a comparison with a monolithic scheme. It was shown that the accuracy of our scheme is comparable to that of the monolithic scheme, while retaining the main advantages of partitioned schemes, such as modularity, simple implementation, and low computational cost.
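
    The key numerical ingredient is Lie operator splitting: within each time step the coupled problem is advanced by solving the sub-problems one after another. The toy script below only illustrates that generic mechanism (and its first-order splitting error) on a 2x2 linear system; the paper's scheme instead splits the FSI problem into fluid and structure sub-problems and modifies the splitting to improve accuracy.

        import numpy as np
        from scipy.linalg import expm

        A = np.array([[0.0, 1.0], [-1.0, 0.0]])     # toy sub-operator (think: structure part)
        B = np.array([[-0.5, 0.0], [0.0, -1.0]])    # toy sub-operator (think: dissipative fluid part)
        u0 = np.array([1.0, 0.0])
        T = 1.0

        for n_steps in (10, 20, 40, 80):
            dt = T / n_steps
            stepA, stepB = expm(dt * A), expm(dt * B)      # exact flows of the two sub-problems over dt
            u = u0.copy()
            for _ in range(n_steps):
                u = stepB @ (stepA @ u)                    # Lie splitting: sub-problem A, then sub-problem B
            err = np.linalg.norm(u - expm(T * (A + B)) @ u0)
            print(f"dt = {dt:.4f}   splitting error = {err:.2e}")   # shrinks roughly linearly with dt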